13 research outputs found

    On an unsupervised method for parameter selection for the elastic net

    Despite recent advances in regularization theory, parameter selection remains a challenge for most applications. In a recent work, the framework of statistical learning was used to approximate the optimal Tikhonov regularization parameter from noisy data. In this work, we improve those results and extend the analysis to elastic net regularization. Furthermore, we design a data-driven, automated algorithm for computing an approximate regularization parameter. Our analysis combines statistical learning theory with insights from regularization theory. We compare our approach with state-of-the-art parameter selection criteria and show that it achieves superior accuracy.
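
    For orientation, a minimal sketch of the quantity being selected, assuming scikit-learn is available: ElasticNet exposes the regularization parameter (alpha below) that the paper's unsupervised criterion approximates. The synthetic data, the grid of values, and the l1_ratio are illustrative choices, not the paper's selection rule.

    # Illustrative only: shows the role of the elastic net parameter,
    # not the paper's data-driven selection method.
    import numpy as np
    from sklearn.linear_model import ElasticNet

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 50))
    x_true = np.zeros(50)
    x_true[:5] = 1.0                      # sparse ground truth
    y = A @ x_true + 0.1 * rng.standard_normal(100)

    # The elastic net minimises ||Ax - y||^2 plus a penalty mixing
    # l1 (sparsity) and l2 (stability); alpha weights that penalty.
    for alpha in [0.001, 0.01, 0.1, 1.0]:
        model = ElasticNet(alpha=alpha, l1_ratio=0.5).fit(A, y)
        err = np.linalg.norm(model.coef_ - x_true)
        print(f"alpha={alpha:<6} reconstruction error={err:.3f}")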

    Unsupervised Knowledge-Transfer for Learned Image Reconstruction

    Deep learning-based image reconstruction approaches have demonstrated impressive empirical performance in many imaging modalities. These approaches generally require a large amount of high-quality training data, which is often not available. To circumvent this issue, we develop a novel unsupervised knowledge-transfer paradigm for learned iterative reconstruction within a Bayesian framework. The proposed approach learns an iterative reconstruction network in two phases. The first phase trains a reconstruction network on a set of ordered pairs comprising ground truth images and measurement data. The second phase fine-tunes the pretrained network to the measurement data without supervision. Furthermore, the framework delivers uncertainty information over the reconstructed image. We present extensive experimental results on low-dose and sparse-view computed tomography, showing that the proposed framework significantly improves reconstruction quality not only visually but also quantitatively in terms of PSNR and SSIM, and is competitive with several state-of-the-art supervised and unsupervised reconstruction techniques.
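
    As a rough illustration of the two-phase scheme described above, the hypothetical PyTorch loop below pretrains a reconstruction network on paired data and then fine-tunes it on measurements alone. The network net, the forward operator A, and the plain data-consistency loss are stand-in assumptions; the paper's Bayesian treatment and uncertainty estimates are not reproduced here.

    # Schematic two-phase training, assuming a PyTorch module net,
    # a differentiable forward operator A, a loader of
    # (ground truth, measurement) pairs, and measurement-only data.
    import torch

    def train_two_phase(net, A, paired_loader, meas_loader, epochs=10):
        opt = torch.optim.Adam(net.parameters(), lr=1e-4)
        # Phase 1: supervised pretraining on ordered pairs.
        for _ in range(epochs):
            for x_gt, y in paired_loader:
                loss = torch.mean((net(y) - x_gt) ** 2)
                opt.zero_grad()
                loss.backward()
                opt.step()
        # Phase 2: unsupervised fine-tuning on measurements alone,
        # here via a simple data-consistency term ||A(net(y)) - y||^2.
        for _ in range(epochs):
            for y in meas_loader:
                loss = torch.mean((A(net(y)) - y) ** 2)
                opt.zero_grad()
                loss.backward()
                opt.step()
        return net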

    An Investigation of Stochastic Variance Reduction Algorithms for Relative Difference Penalised 3D PET Image Reconstruction

    Penalised PET image reconstruction algorithms are often accelerated during early iterations with the use of subsets. However, these methods may exhibit limit-cycle behaviour at later iterations due to variations between subsets. Desirable converged images can be achieved for a subclass of these algorithms via a relaxed step-size sequence, but the heuristic selection of parameters affects the quality of the image sequence and the algorithm's convergence rate. In this work, we demonstrate the adaptation and application of a class of stochastic variance-reduced gradient algorithms for PET image reconstruction using the relative difference penalty, and we numerically compare their convergence performance to BSREM. The two investigated algorithms are SAGA and SVRG; both retain recently computed subset gradients in memory and use them in subsequent updates. We present several numerical studies based on Monte Carlo simulated data and a patient data set for fully 3D PET acquisitions. The impact of the number of subsets, different preconditioners, and step-size methods on the convergence of region-of-interest values within the reconstructed images is explored. We observe that, with constant preconditioning, SAGA and SVRG show smaller variations in voxel values between subsequent updates and are less reliant on step-size hyper-parameter selection than BSREM reconstructions. Furthermore, SAGA and SVRG can converge significantly faster to the penalised maximum-likelihood solution than BSREM, particularly on low-count data.
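
    The gradient memory mentioned above can be made concrete with a schematic SAGA update. The generic subset-gradient callback below stands in for the Poisson log-likelihood with relative difference penalty; the preconditioning and step-size rules studied in the work are omitted.

    # Schematic SAGA iteration with a table of stored subset gradients;
    # subset_grad(x, s) is an assumed callback returning the gradient
    # of subset s at x (a stand-in for the PET objective).
    import numpy as np

    def saga(subset_grad, x0, n_subsets, step, n_iter, rng):
        x = x0.copy()
        table = [subset_grad(x, s) for s in range(n_subsets)]  # gradient memory
        g_avg = np.mean(table, axis=0)
        for _ in range(n_iter):
            s = rng.integers(n_subsets)        # draw one subset at random
            g_new = subset_grad(x, s)
            # SAGA estimator: fresh subset gradient, minus its stored
            # value, plus the running average of all stored gradients.
            x = x - step * (g_new - table[s] + g_avg)
            g_avg = g_avg + (g_new - table[s]) / n_subsets
            table[s] = g_new
        return x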

    Continuous Parabolic Molecules

    No full text

    Stochastic gradient descent for linear inverse problems in variable exponent Lebesgue spaces

    No full text
    We consider a stochastic gradient descent (SGD) algorithm for solving linear inverse problems (e.g., CT image reconstruction) in the Banach space framework of variable exponent Lebesgue spaces ℓ^(p_n)(ℝ). Such non-standard spaces have recently been proved to be the appropriate functional framework for enforcing pixel-adaptive regularisation in signal and image processing applications. Compared to its use in Hilbert settings, however, the application of SGD in the Banach setting of ℓ^(p_n)(ℝ) is not straightforward, due, in particular, to the lack of a closed-form expression and the non-separability of the underlying norm. In this manuscript, we show that SGD iterations can effectively be performed using the associated modular function. Numerical validation on both simulated and real CT data shows significant improvements over SGD solutions in both Hilbert and other Banach settings, in particular when non-Gaussian or mixed noise is observed in the data.
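
    A hedged sketch of the modular-based step, assuming a sequence-space discretisation with per-coordinate exponents p_n > 1: the stochastic gradient update is taken on a dual variable, and the primal iterate is recovered through the inverse derivative of the modular rho(x) = sum_n |x_n|^(p_n) / p_n. This mirror-descent-style loop is an illustration, not the authors' exact scheme.

    # Schematic SGD for Ax = y with a per-coordinate exponent vector p
    # (all entries > 1); names and the update form are assumptions.
    import numpy as np

    def modular_sgd(A, y, p, step, n_iter, rng):
        m, n = A.shape
        x_dual = np.zeros(n)

        def to_primal(v):
            # Inverse of the modular derivative sign(x)|x|^(p-1),
            # applied componentwise.
            return np.sign(v) * np.abs(v) ** (1.0 / (p - 1.0))

        for _ in range(n_iter):
            i = rng.integers(m)                    # one random row: stochastic step
            residual = A[i] @ to_primal(x_dual) - y[i]
            x_dual -= step * residual * A[i]       # gradient step taken in the dual
        return to_primal(x_dual)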